Hub backup to deal with hub failure in hub and spoke network
HU Jingjing, HUANG Youfang
Journal of Computer Applications    2018, 38 (6): 1814-1819.   DOI: 10.11772/j.issn.1001-9081.2017102564
To improve the reliability of a hub-and-spoke network and keep it operating when an initial hub fails, a new hub backup optimization method was proposed, in which a backup hub is selected for each hub node so as to optimize both the initial cost and the backup cost of the network. Firstly, hub backup variables were introduced into the basic hub-and-spoke network model to form an extended nonlinear programming model. The extended model was linearized by variable substitution, and the mathematical solver CPLEX was used to solve small-scale instances of the hub backup problem. Then, the number of network nodes was increased, and a genetic algorithm was designed to solve the large-scale hub backup optimization problem. Finally, by adjusting the proportional weights of the initial network cost and the backup cost in both CPLEX and the genetic algorithm, exact and near-optimal solutions for the initial cost, the backup cost, the hub locations and the backup hubs were obtained, and the optimal initial hub-and-spoke network, backup hubs and objective value were determined in example experiments. The experimental results show that the backup hub selected by the proposed method shares the traffic and capacity of its initial hub; when the initial hub fails, the backup hub can take over its transportation tasks and keep the network running. The proposed hub backup optimization method can be applied to emergency logistics and to the security management of logistics networks.
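The trade-off between the initial cost and the backup cost can be pictured with a small, self-contained sketch (an illustrative toy, not the paper's extended model or its CPLEX/genetic-algorithm solvers): hubs are fixed, each node is served by its nearest hub, every hub takes its nearest other hub as backup, and a weight alpha blends the two cost terms. The function name, cost definitions and alpha value are assumptions.

# Illustrative sketch of a weighted "initial cost + backup cost" objective for a
# hub-and-spoke design; the cost terms and the weight alpha are assumptions,
# not the exact formulation used in the paper.
import numpy as np

def weighted_hub_backup_cost(dist, hubs, alpha=0.7):
    """dist: symmetric distance matrix; hubs: list of hub node indices."""
    hubs = list(hubs)
    n = dist.shape[0]
    # Initial cost: each node is served by its nearest hub.
    assign = [hubs[np.argmin([dist[i, h] for h in hubs])] for i in range(n)]
    initial_cost = sum(dist[i, assign[i]] for i in range(n))
    # Backup cost: each hub is backed up by its nearest other hub,
    # which would take over the hub's spokes if the hub fails.
    backup_cost = 0.0
    for h in hubs:
        others = [k for k in hubs if k != h]
        b = min(others, key=lambda k: dist[h, k])      # chosen backup hub
        backup_cost += sum(dist[i, b] for i in range(n) if assign[i] == h)
    # Weighted objective: alpha trades off normal operation against failure recovery.
    return alpha * initial_cost + (1 - alpha) * backup_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((10, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(weighted_hub_backup_cost(dist, hubs=[2, 5, 8], alpha=0.7))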
Proportional fairness and maximum weighted sum-rate in D2D communications underlaying cellular networks
HU Jing, ZHENG Wu
Journal of Computer Applications    2017, 37 (5): 1321-1325.   DOI: 10.11772/j.issn.1001-9081.2017.05.1321
To address user fairness in Device-to-Device (D2D) communication systems, the existing proportional fairness principle was first extended to derive an optimization problem on the weighted sum-rate, and a Kuhn-Munkres Proportional Fair (KMPF) resource allocation algorithm was then proposed to solve it. The algorithm maximizes the users' weighted sum-rate through power control, and allocates the cellular users' resources that can be reused by the D2D users with the Kuhn-Munkres (KM) algorithm so that the total weighted sum-rate is maximized. Simulation results show that the fairness index of the proposed algorithm is 0.4 higher than that of the greedy resource allocation algorithm while its system throughput remains above 95% of the greedy algorithm's level, and its throughput is about 50% higher than that of the random resource allocation algorithm. These results indicate that the algorithm can address user fairness while still taking system throughput into account.
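The resource-matching step can be illustrated with a brief sketch using SciPy's Hungarian-algorithm routine as a stand-in for the KM algorithm; the dimensions, weights and rate matrix below are synthetic assumptions, not the system model of the paper.

# Hedged sketch: assign each D2D pair at most one reusable cellular resource so
# that the total weighted sum-rate is maximized, via the Hungarian (KM) algorithm.
# The rate matrix and weights are synthetic placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(42)
num_d2d, num_cellular = 4, 6           # assumed toy dimensions

# weighted_rate[i, j]: weighted sum-rate achieved if D2D pair i reuses the
# resource of cellular user j (after power control); here just random numbers.
weights = rng.uniform(0.5, 1.5, size=num_d2d)
rates = rng.uniform(1.0, 10.0, size=(num_d2d, num_cellular))
weighted_rate = weights[:, None] * rates

# KM / Hungarian algorithm: choose a one-to-one matching with maximum total weight.
rows, cols = linear_sum_assignment(weighted_rate, maximize=True)
for i, j in zip(rows, cols):
    print(f"D2D pair {i} reuses cellular user {j}'s resource "
          f"(weighted rate {weighted_rate[i, j]:.2f})")
print("total weighted sum-rate:", weighted_rate[rows, cols].sum())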
Scene-classification-based detail-preserving histogram equalization
HU Jing, MA Xiaofeng, SHENG Weixing, HAN Yubing
Journal of Computer Applications    2014, 34 (7): 2001-2004.   DOI: 10.11772/j.issn.1001-9081.2014.07.2001
To address the gray-level swallowing and over-enhancement problems of traditional histogram equalization, an improved histogram equalization algorithm combining scene classification with detail preservation was proposed. In this algorithm, images were classified according to their histogram features, and the parameters of piecewise histogram equalization were optimized according to the scene class and the characteristics of the image histogram. The complexity of the improved algorithm is only O(L), where L is the number of gray levels (256 here), so it requires little computation while avoiding the swallowing and over-enhancement problems of traditional histogram equalization. Results on a TI (Texas Instruments) DM648 platform show that the algorithm can be used for real-time video image enhancement.
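The O(L) figure reflects that the method works on the 256-bin histogram and a lookup table rather than on per-pixel statistics. A minimal sketch of that idea follows; it uses simple histogram clipping as a stand-in for the paper's scene-dependent piecewise parameters, so the clip_ratio and function name are assumptions.

# Minimal sketch of histogram equalization with clipping to limit over-enhancement.
# Only the O(L) histogram/mapping steps are shown; the clip limit is hypothetical.
import numpy as np

def clipped_hist_equalization(gray, clip_ratio=2.0, L=256):
    """gray: 2-D uint8 image; returns an equalized uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(np.float64)
    # Clip overly tall bins and redistribute the excess uniformly,
    # which tempers over-enhancement of dominant gray levels.
    limit = clip_ratio * hist.mean()
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / L
    # Cumulative distribution -> O(L) lookup table mapping old levels to new ones.
    cdf = np.cumsum(hist)
    lut = np.round((L - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]

if __name__ == "__main__":
    img = np.clip(np.random.default_rng(1).normal(90, 20, (120, 160)), 0, 255).astype(np.uint8)
    out = clipped_hist_equalization(img)
    print(img.min(), img.max(), "->", out.min(), out.max())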
Improved Gaussian mixture model and shadow elimination method
CHEN Lei, ZHANG Rongguo, HU Jing, LIU Kun
Journal of Computer Applications    2013, 33 (05): 1394-1400.   DOI: 10.3724/SP.J.1087.2013.01394
To effectively reduce the computation of the Gaussian mixture model and improve the accuracy of shadow elimination in moving object detection, an algorithm was proposed that updates the model selectively and eliminates shadows according to the change of brightness. Firstly, before a Gaussian distribution was updated, its weight was compared with the proportion of pixels not belonging to the background: if the weight was larger, the distribution was not updated; otherwise it was updated. Secondly, the range of brightness change was taken as a threshold factor for shadow detection, so that the threshold could be adjusted adaptively with the change of brightness. Finally, the algorithm was compared with traditional ones in experiments on indoor and outdoor videos. The experimental results show that the time consumption of the proposed algorithm is about one third of that of the traditional algorithms and the accuracy of shadow elimination is improved, which confirms the efficiency of the algorithm.
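The brightness-based shadow test can be sketched roughly as follows: a foreground pixel is relabeled as shadow when its brightness drops within a range relative to the background model while its hue and saturation stay close. The fixed thresholds in this sketch are illustrative placeholders for the paper's adaptively adjusted threshold.

# Hedged sketch of shadow elimination by brightness ratio: a foreground pixel is
# treated as shadow if it darkens the background within [low, high] while hue and
# saturation stay close. Thresholds are illustrative, not the paper's adaptive values.
import numpy as np

def shadow_mask(frame_hsv, bg_hsv, fg_mask, low=0.4, high=0.9,
                sat_tol=40, hue_tol=20):
    """All images are HxWx3 HSV arrays (uint8); fg_mask is a boolean HxW array."""
    h, s, v = [frame_hsv[..., k].astype(np.float32) for k in range(3)]
    hb, sb, vb = [bg_hsv[..., k].astype(np.float32) for k in range(3)]
    ratio = v / np.maximum(vb, 1.0)                     # brightness change
    is_shadow = (
        fg_mask
        & (ratio >= low) & (ratio <= high)              # darker, but not too dark
        & (np.abs(s - sb) <= sat_tol)                   # similar saturation
        & (np.abs(h - hb) <= hue_tol)                   # similar hue
    )
    return is_shadow

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.integers(0, 256, (4, 4, 3), dtype=np.uint8)
    fr = bg.copy()
    fr[..., 2] = (fr[..., 2] * 0.6).astype(np.uint8)    # uniformly darker frame
    # Pixels flagged here would be removed from the foreground before object extraction.
    print(shadow_mask(fr, bg, np.ones((4, 4), dtype=bool)))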
A self-adaptive approach for information integration
CHENG Guo-da, ZOU Ya-hui, ZHU Jing
Journal of Computer Applications    2005, 25 (03): 666-669.   DOI: 10.3724/SP.J.1087.2005.0666
Detecting records that are approximate rather than exact duplicates is one of the key tasks in information integration. Although various algorithms have been presented for detecting duplicate records, string matching is essential to all of them. In the self-adaptive information integration algorithm presented in this paper, a hybrid similarity combining edit distance with a token-based metric was used to measure the degree of similarity between strings. To avoid mismatches caused by different expressions, the strings in records were partitioned into words and sorted by their first character, so that misspellings and abbreviations could be tolerated during word matching. The experimental results demonstrate that the self-adaptive approach for information integration achieves higher accuracy than approaches using the Smith-Waterman edit distance or the Jaro distance.
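A hybrid measure of this kind can be sketched as below; the 0.5/0.5 weighting, the whitespace tokenization and the exact token-overlap term are assumptions standing in for the paper's comprehensive edit-distance-plus-token metric and its sorted fuzzy word matching.

# Illustrative hybrid string similarity: a blend of normalized edit-distance
# similarity and token-overlap similarity. Weights and tokenization are assumptions
# for demonstration, not the paper's parameters.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def hybrid_similarity(s1: str, s2: str, w_edit: float = 0.5) -> float:
    # Character-level similarity from normalized edit distance.
    m = max(len(s1), len(s2)) or 1
    edit_sim = 1.0 - edit_distance(s1.lower(), s2.lower()) / m
    # Token-level similarity: a simple exact-overlap stand-in for the paper's
    # sorted, fuzzy word matching (which tolerates misspellings/abbreviations).
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(t1 & t2) / max(len(t1 | t2), 1)
    return w_edit * edit_sim + (1 - w_edit) * overlap

if __name__ == "__main__":
    print(hybrid_similarity("International Business Machines Corp.",
                            "IBM Corp. International Business Machines"))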
Agent model for hyperparameter self-optimization of deep classification model
ZHANG Rui, PAN Junming, BAI Xiaolu, HU Jing, ZHANG Rongguo, ZHANG Pengyun
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2023091313
Online available: 01 April 2024

Gait recognition method based on deep learning
HU Jingwen, LI Xiaokun, CHEN Hongxu, XU Qincheng, HUANG Yiqun, LIN Yi
Journal of Computer Applications    DOI: 10.11772/j.issn.1001-9081.2019081504
Accepted: 03 September 2019